The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest raw data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and point out potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
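As a hedged illustration of the feature-selection step described above, one can rank candidate user property features by information gain and keep the top 20. The `information_gain` helper, the synthetic data, and the stand-in bot/human labels below are illustrative assumptions, not MGTAB itself:

```python
import numpy as np

def information_gain(x, y, bins=10):
    """Entropy of labels y minus conditional entropy of y given a binned feature x."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    h_y_given_x = 0.0
    for b in np.unique(x_binned):
        mask = x_binned == b
        h_y_given_x += mask.mean() * entropy(y[mask])
    return entropy(y) - h_y_given_x

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))            # 500 users, 40 candidate property features
y = (X[:, 3] + X[:, 7] > 0).astype(int)   # stand-in bot/human labels

gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
top20 = np.argsort(gains)[::-1][:20]      # indices of the 20 most informative features
print(X[:, top20].shape)
```

On this synthetic data the two label-determining features (columns 3 and 7) receive the highest gains and survive the selection.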
Image-text retrieval (ITR) is a challenging task in the field of multimodal information processing due to the semantic gap between different modalities. In recent years, researchers have made great progress in exploring the accurate alignment between image and text. However, existing works mainly focus on the fine-grained alignment between image regions and sentence fragments, which ignores the guiding significance of context background information. Actually, integrating the local fine-grained information and global context background information can provide more semantic clues for retrieval. In this paper, we propose a novel Hierarchical Graph Alignment Network (HGAN) for image-text retrieval. First, to capture the comprehensive multimodal features, we construct feature graphs for the image and text modalities respectively. Then, a multi-granularity shared space is established with a designed Multi-granularity Feature Aggregation and Rearrangement (MFAR) module, which enhances the semantic correspondence between the local and global information and obtains more accurate feature representations for the image and text modalities. Finally, the ultimate image and text features are further refined through three-level similarity functions to achieve the hierarchical alignment. To validate the proposed model, we perform extensive experiments on the MS-COCO and Flickr30K datasets. Experimental results show that the proposed HGAN outperforms the state-of-the-art methods on both datasets, which demonstrates the effectiveness and superiority of our model.
To balance annotation labor against the granularity of supervision, single-frame annotation has been introduced in temporal action localization. It provides a rough temporal location for an action but implicitly overstates the supervision from the annotated frame during training, leading to confusion between actions and backgrounds, i.e., action incompleteness and background false positives. To tackle these two challenges, in this work, we present the Snippet Classification model and the Dilation-Erosion module. The Dilation-Erosion module expands the potential action segments with a loose criterion to alleviate the problem of action incompleteness, and then removes the background from the potential action segments to alleviate the problem of background false positives. Relying on the single-frame annotation and the output of the snippet classification, the Dilation-Erosion module mines pseudo snippet-level ground truth, hard backgrounds, and evident backgrounds, which in turn further train the Snippet Classification model. This forms a cyclic dependency. Furthermore, we propose a new embedding loss to aggregate the features of action instances with the same label and separate the features of actions from backgrounds. Experiments on THUMOS14 and ActivityNet 1.2 validate the effectiveness of the proposed method. Code has been made publicly available (https://github.com/LingJun123/single-frame-TAL).
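The dilation-erosion idea can be sketched on a toy 1D sequence of snippet scores. The thresholds and the `segments` helper below are illustrative assumptions for intuition only, not the paper's actual criteria:

```python
import numpy as np

# Toy sketch: a loose threshold "dilates" candidate action segments to fight
# incompleteness; a strict threshold "erodes" them to drop likely background.
scores = np.array([0.1, 0.2, 0.7, 0.9, 0.85, 0.4, 0.15, 0.1])

def segments(mask):
    """Return (start, end) index pairs of contiguous True runs in a boolean mask."""
    segs, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segs.append((start, i))
            start = None
    if start is not None:
        segs.append((start, len(mask)))
    return segs

dilated = segments(scores > 0.3)   # loose criterion: recover the complete action
eroded = segments(scores > 0.8)    # strict criterion: keep only evident action
print(dilated, eroded)             # the gap between the two holds hard backgrounds
```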
Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time step, making previous PTQ methods fail in DMs since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from these comprehensive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
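A minimal sketch of what 8-bit post-training quantization does to a weight tensor may help fix intuition. This is generic symmetric PTQ on a random tensor, not the paper's DM-specific method, which additionally handles the time-step-dependent activation distributions:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization: one scale derived from the tensor's max magnitude."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max() <= s / 2 + 1e-6)  # error bounded by half a quantization step
```

Storing `q` plus one scale takes roughly a quarter of the full-precision memory, which is the source of the 4x compression that 8-bit PTQ offers.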
This paper proposes an efficient and safe method for avoiding static and dynamic obstacles based on LiDAR. First, point clouds are used to generate a real-time local grid map for obstacle detection. Then, obstacles are clustered by the DBSCAN algorithm and enclosed with minimum bounding ellipses (MBEs). In addition, data association is performed to match each MBE with the corresponding obstacle in the current frame. Taking the MBE as an observation, a Kalman filter (KF) is used to estimate and predict the motion state of each obstacle. In this way, the trajectory of each obstacle over the forward time domain can be parameterized as a set of ellipses. Considering the uncertainty of the MBE, the semi-major and semi-minor axes of the parameterized ellipses are expanded to ensure safety. We extend the traditional Control Barrier Function (CBF) and propose the Dynamic Control Barrier Function (D-CBF). We combine the D-CBF with Model Predictive Control (MPC) to implement safety-critical dynamic obstacle avoidance. Experiments in simulated and real-world scenarios were conducted to verify the effectiveness of our algorithm. The source code is released for the reference of the community.
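The motion-state estimation step can be sketched as a constant-velocity Kalman filter driven by MBE-center observations. The matrices, noise levels, and measurements below are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity model, state [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe only the MBE center (x, y)
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.05 * np.eye(2)                        # observation noise (assumed)

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.1, 0.0]), np.array([0.2, 0.05]), np.array([0.31, 0.09])]:
    x, P = F @ x, F @ P @ F.T + Q           # predict one step ahead
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                 # correct with the MBE observation
    P = (np.eye(4) - K @ H) @ P

print(np.round(x[:2], 2))                   # filtered obstacle position
```

Iterating the predict step alone (without corrections) yields the forward-time trajectory whose ellipses the method then inflates for safety.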
The continued digitization of societal processes translates into a proliferation of time series data that cover applications such as fraud detection, intrusion detection, and energy management, where anomaly detection is often essential to enable reliability and safety. Many recent studies target anomaly detection for time series data. Indeed, time series anomaly detection is characterized by diverse data, methods, and evaluation strategies, and comparisons in existing studies consider only part of this diversity, which makes it difficult to select the best method for a particular problem setting. To address this shortcoming, we introduce taxonomies for data, methods, and evaluation strategies, provide a comprehensive overview of unsupervised time series anomaly detection using the taxonomies, and systematically evaluate and compare state-of-the-art traditional as well as deep learning techniques. In an empirical study using nine publicly available datasets, we apply the most commonly used performance evaluation metrics to typical methods under fair implementation standards. Based on the structure offered by the taxonomies, we report on the empirical study and provide guidelines, in the form of comparative tables, for selecting the methods best suited for particular application settings. Finally, we offer research directions for this dynamic field.
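As a concrete example of the traditional unsupervised detectors that such comparisons typically include, a rolling z-score detector can be sketched as follows; it is a generic textbook baseline chosen for illustration, not necessarily one of the surveyed methods:

```python
import numpy as np

def rolling_zscore_anomalies(x, window=20, threshold=3.0):
    """Flag points whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations of that window."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        hist = x[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(x[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

rng = np.random.default_rng(0)
series = rng.normal(0, 1, 200)
series[150] += 10.0                       # injected point anomaly
print(np.flatnonzero(rolling_zscore_anomalies(series)))
```

Even this simple detector illustrates the evaluation questions the taxonomies organize: the choice of window, threshold, and scoring all change which points are reported as anomalous.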
As a probabilistic modeling technique, flow-based models have demonstrated great potential in the field of lossless compression \cite{idf,idf++,lbb,ivpf,iflow}. Compared with other deep generative models (e.g., autoregressive models, VAEs) \cite{bitswap,hilloc,pixelcnn++,pixelsnail}, which model the data distribution probabilities implicitly, flow-based models perform better owing to their excellent probability density estimation and satisfactory inference speed. Among flow-based models, the multi-scale architecture provides a shortcut from the shallow layers to the output layer, which significantly reduces the computational complexity and avoids performance degradation when adding more layers. It is essential for constructing advanced flow-based learnable bijective mappings. Furthermore, the lightweight requirement on model design in practical compression tasks suggests that flows with multi-scale architectures achieve the best trade-off between coding complexity and compression efficiency.
Deep learning technologies have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only little storage, and incur only low computational overhead. This survey offers comprehensive coverage of studies of design automation techniques for deep learning models targeting edge computing. It provides an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of the state of the art in deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Existing deraining methods mainly focus on a single input image. With only a single input image, it is difficult to accurately detect rain streaks, remove them, and restore the rain-free image. Compared with a single 2D image, a light field image (LFI) embeds abundant 3D structure and texture information by recording the direction and position of each incident ray via a plenoptic camera, which has become a popular device in the computer vision and graphics research communities. In this paper, we propose a novel network, 4D-MGP-SRRNet, to remove rain streaks from LFIs. Our method takes all sub-views of a rainy LFI as input. To make full use of the LFI, we adopt 4D convolutional layers to build the proposed rain streak removal network, which processes all sub-views of the LFI simultaneously. In the proposed network, a rain detection model, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect rain streaks in all sub-views of the input LFI. Semi-supervised learning is introduced to accurately detect rain streaks by training on both virtual-world LFIs and real-world LFIs at multiple scales, via computing pseudo ground truths for real-world rain streaks. All sub-views, with the predicted rain streaks subtracted, are then fed into a 4D residual model to estimate depth maps. Finally, all sub-views concatenated with the corresponding rain streaks and the fog maps converted from the estimated depth maps are fed into a rainy LFI restoration model based on an adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic and real-world LFIs demonstrate the effectiveness of our proposed method.
Recent trends in cloud computing technology have effectively boosted applications of visual inspection. However, most of the available systems work in a human-in-the-loop manner and cannot provide long-term support for online applications. To make a step forward, this paper outlines an automatic annotation system called SSAA, which works in a self-supervised learning manner to continuously perform online visual inspection in manufacturing automation scenarios. Benefiting from self-supervised learning, SSAA effectively establishes a visual inspection application over its whole life cycle. In the early stage, with only anomaly-free data, an unsupervised algorithm is employed to handle the pretext task and generate coarse labels for the subsequent data. Supervised algorithms are then trained for the downstream task. With a user-friendly web-based interface, SSAA makes it very convenient to integrate and deploy both the unsupervised and the supervised algorithms. So far, the SSAA system has been adopted in several real-life industrial applications.